
Update of the abbreviations #206

Merged: 13 commits into koenderks:development on Feb 11, 2025

Conversation

@federicapicogna (Contributor)

No description provided.

@koenderks (Owner) left a comment:

  1. Didn't we settle on negative predictive rate parity? Why the change back to negative predictive value? I think you might not have reset your development branch after merging your previous pull request.
  2. What happens when you use metric = "eo" in model_fairness()? It should at least give a warning message or something that it is not possible.
  3. Can you make sure that the unit tests work again?
  4. Change "dp" back to "pp", since the proportional parity is equivalent to the disparate impact. "dp" is demographic parity and is calculated as (TP + FP) (see the sketch after this list).
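
As a side note on point 4, here is a minimal sketch of the two quantities being distinguished; the counts and object names are invented for illustration and do not come from the package source:

```r
# Hedged illustration of the dp/pp distinction, assuming the usual
# confusion-matrix counts TP, FP, TN, FN for a single group.
counts <- list(TP = 40, FP = 10, TN = 45, FN = 5)  # hypothetical counts

# Demographic parity ("dp"): the raw number of positive predictions, TP + FP.
dp <- counts$TP + counts$FP

# Proportional parity ("pp"): the rate of positive predictions,
# (TP + FP) / (TP + FP + TN + FN), which is why comparing it across groups
# lines up with the disparate impact calculation.
pp <- (counts$TP + counts$FP) / (counts$TP + counts$FP + counts$TN + counts$FN)

dp  # 50
pp  # 0.5
```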

Commits:
Predicted changed into predictive
Confusion matrix inverted
Disparate impact is pp and not dp
Missing = error for eo
@koenderks (Owner)

Unit tests still fail:

Failure ('test-fairness-selection.R:25:3'): Validation of fairness selection ──
outcome$measure not equal to "dp".
1/1 mismatches
x[1]: "eo"
y[1]: "dp"
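
For reference, this is roughly the shape of the failing expectation, reconstructed only from the log above; `outcome` is a placeholder for whatever the package's selection routine actually returns, since the test file itself is not shown in this thread:

```r
library(testthat)

# Placeholder standing in for the object produced by the fairness selection;
# on the current branch it apparently carries measure = "eo" instead of "dp".
outcome <- list(measure = "eo")

test_that("Validation of fairness selection", {
  expect_equal(outcome$measure, "dp")  # fails: "eo" != "dp", matching the log
})
```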

@koenderks self-requested a review on December 17, 2024 at 14:34.
@koenderks (Owner) left a comment:

Check the unit tests

@koenderks (Owner)

I think this still needs the error handling for metric = "eo", right?

@koenderks (Owner)

This seems like a lot of empty space; can you put the legend closer to the diagram? Also, revert the changes in the stanmodels.R file and don't forget to implement error handling for metric = "eo" in model_fairness().

[Screenshot: plot with excess empty space between the diagram and the legend]

@federicapicogna (Contributor, Author)

The strange thing about stanmodels.R is that when I run the lint test, it tells me that I have to change it this way or the test will fail. As for the rest, I fixed the legend and I'm now working on "eo".
[Screenshot 2025-02-06 at 10:13:52]

@koenderks (Owner)

Just implement a simple error message for when metric is "eo"; we can handle the rest in JASP. Then undo all changes in stanmodels.R.

@koenderks (Owner) left a comment:

See comment above.

@koenderks (Owner) left a comment:

There are lots of changes in the model_fairness.R file, but you only need to add an error message. Please make sure that the changes here are (or show up in the pull request as) minimal.

if (metric != "dp") {
if (metric == "eo") {
stop("Error: 'Equalized Odds' (EO) requires simultaneous evaluation of 'True Positive Rate' (TPR) and 'False Positive Rate' (FPR). You must first select tprp, then fprp, and ensure both fairness measures conclude no discrimination for both privileged and unprivileged groups before concluding no discrimination according to EO.")
} else {
@koenderks (Owner) commented on this hunk:

Remove the else here; you quit the analysis at this point anyway if metric = "eo".
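
A minimal sketch of the structure this suggestion leads to, with the error message abridged; because stop() aborts execution when metric = "eo", the remaining code no longer needs an enclosing else branch:

```r
metric <- "eo"  # example input; in the package this is an argument of model_fairness()

if (metric != "dp") {
  if (metric == "eo") {
    # Abridged version of the message from the hunk above.
    stop("'Equalized Odds' (EO) requires simultaneous evaluation of TPR and FPR; select tprp and fprp instead.")
  }
  # ...the code that previously sat in the else branch continues here unchanged.
}
```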

@koenderks merged commit 6e4fb23 into koenderks:development on Feb 11, 2025.